
    Towards a Framework for Embodying Any-Body through Sensory Translation and Proprioceptive Remapping: A Pilot Study

    We address the problem of physical avatar embodiment and investigate the most general factors that may allow a person to "wear" another body, different from her own. A general approach is required to exploit the fact that an avatar can have any kind of body. With this pilot study we introduce a conceptual framework for the design of non-anthropomorphic embodiment, aimed at fostering immersion and user engagement. The person is interfaced with the avatar, a robot, through a system that induces a divergent internal sensorimotor mapping while controlling the avatar, to create an immersive experience. Together with the conceptual framework, we present two implementations: a prototype tested in the lab and an interactive installation exhibited to the general public. These implementations consist of a wheeled robot together with control and sensory feedback systems. The control system includes mechanisms that both detect and resist the user's movement, increasing the sense of connection with the avatar; the feedback system is a virtual reality (VR) environment representing the avatar's unique perception, combining sensor and control information to generate visual cues. Data gathered from users indicate that systems implemented following the proposed framework create a challenging and engaging experience, thus providing solid ground for further developments.

    Policy Feedback in Deep Reinforcement Learning to Exploit Expert Knowledge

    In Deep Reinforcement Learning (DRL), agents learn by sampling transitions from a batch of stored data called the Experience Replay. In most DRL algorithms, the Experience Replay is filled with experiences gathered by the learning agent itself. However, agents trained completely Off-Policy, on experiences gathered by behaviors entirely decoupled from their own, cannot learn to improve their policies. In general, the more Off-Policy the training, the more prone algorithms become to divergence. The main contribution of this research is a novel learning framework called Policy Feedback, used both as a tool to leverage offline-collected expert experiences and as a general framework to improve the understanding of the issues behind Off-Policy Learning.
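The Experience Replay mechanism the abstract refers to can be sketched as a fixed-capacity buffer from which minibatches of transitions are drawn uniformly at random. This is a minimal illustrative sketch, not the paper's implementation; the class and method names (`ReplayBuffer`, `push`, `sample`) are assumptions for illustration.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest transition once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling decorrelates consecutive transitions before a gradient step
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Fill with dummy transitions and draw a minibatch, as a learning agent would.
buf = ReplayBuffer(capacity=100)
for i in range(10):
    buf.push(i, 0, 1.0, i + 1, False)
batch = buf.sample(4)
```

In the fully Off-Policy setting described above, this buffer would be populated by an external expert's behavior rather than by the learning agent, which is the situation Policy Feedback is designed to address.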